Python Job: Technical Lead Data Engineer, Python & SQL (Hybrid)

Location

Charlotte, North Carolina - United States of America

Job type

Full-Time

Python Job Details

Technical Lead Data Engineer, Python & SQL (Hybrid)

We have an immediate need for a contract-to-hire Technical Lead Data Engineer to join our client, a national chain of convenience stores and gas stations. The Technical Lead Data Engineer will spearhead the design, development, and implementation of data solutions that enable the organization to derive actionable insights from complex datasets.

Location: Charlotte, North Carolina OR Tempe, Arizona (Hybrid working 2 days onsite per week)

This position pays approximately $87 to $91 per hour, plus benefits.

What You Will Do:


  • Apply advanced knowledge of Data Engineering principles, methodologies and techniques to design and implement data loading and aggregation frameworks across broad areas of the organization.
  • Gather and process raw, structured, semi-structured and unstructured data using batch and real-time data processing frameworks.
  • Implement and optimize data solutions in enterprise data warehouses and big data repositories, focusing primarily on movement to the cloud.
  • Deliver new and enhanced capabilities to Enterprise Data Platform partners to meet product, engineering, and business needs.
  • Build enterprise systems using Databricks, Snowflake, and cloud platforms such as Azure, AWS, and Google Cloud Platform.
  • Leverage strong Python, Spark, SQL programming skills to construct robust pipelines for efficient data processing and analysis.
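
As a toy illustration of the Python-and-SQL pipeline work described above, the sketch below loads raw records and aggregates them with SQL. It uses the standard-library sqlite3 module purely to stay self-contained; in this role the same pattern would run at enterprise scale on Spark, Databricks, or Snowflake, and all table and column names here are hypothetical, not taken from the posting.

```python
# Toy batch pipeline: ingest raw transaction records, then aggregate with SQL.
# sqlite3 is used only to keep the example self-contained and runnable.
import sqlite3

# Raw input rows: (store_id, category, amount) -- illustrative data only.
raw_rows = [
    ("store_001", "fuel", 45.50),
    ("store_001", "snacks", 12.25),
    ("store_002", "fuel", 60.00),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (store_id TEXT, category TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", raw_rows)

# Aggregation step: total revenue per store.
totals = dict(
    conn.execute(
        "SELECT store_id, SUM(amount) FROM sales GROUP BY store_id ORDER BY store_id"
    ).fetchall()
)
print(totals)  # {'store_001': 57.75, 'store_002': 60.0}
```

The same load-then-aggregate shape carries over directly to a Spark DataFrame `groupBy` or a Snowflake `GROUP BY`, with the framework handling distribution.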

What Gets You the Job:


  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • Proven experience (8+ years) in a data engineering role, with expertise in designing and building data pipelines, ETL processes, and data warehouses.
  • Strong proficiency in SQL, Python, and Spark.
  • Strong experience with cloud platforms such as AWS, Azure, or Google Cloud Platform is a must.
  • Hands-on experience with big data technologies such as Hadoop, Spark, Kafka, and distributed computing frameworks.
  • Knowledge of data lake and data warehouse solutions such as Databricks, Snowflake, Amazon Redshift, and Google BigQuery, as well as orchestration tools such as Azure Data Factory and Airflow.
  • Experience in implementing CI/CD pipelines for automating build, test, and deployment processes.
  • Solid understanding of data modeling concepts, data warehousing architectures, and data management best practices.
  • Excellent communication and leadership skills, with the ability to effectively collaborate with cross-functional teams and drive consensus on technical decisions.
  • Relevant certifications (e.g., Azure, Databricks, Snowflake) would be a plus.

Irvine Technology Corporation (ITC) is a leading provider of technology and staffing solutions for the IT, Security, Engineering, and Interactive Design disciplines, serving clients from startups to enterprises nationwide. We pride ourselves on our ability to introduce you to our close-knit network of business and technology leaders, bringing you opportunity coupled with personal growth and professional development. Join us. Let us catapult your career!

Irvine Technology Corporation provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. In addition to federal law requirements, Irvine Technology Corporation complies with applicable state and local laws governing non-discrimination in employment in every location in which the company has facilities.